On increasing Markov process

Authors

Abstract


Similar references

Quantum Markov Process on a Lattice

We develop a systematic description of Weyl and Fano operators on a lattice phase space. By introducing a so-called ghost variable even on an odd lattice, odd and even lattices can be treated in a symmetric way. The Wigner function is defined using these operators on the quantum phase space, which can be interpreted as a spin phase space. If we extend the space with a dichotomic variable, a posi...

A Markov Process on Cyclic Words

The TASEP (totally asymmetric simple exclusion process) studied here is a Markov chain on cyclic words over the alphabet {1, 2, ..., n} given by at each time step sorting an adjacent pair of letters chosen uniformly at random. For example, from the word 3124 one may go to 1324, 3124, 3124, 4123 by sorting the pair 31, 12, 24, or 43. Two words have the same type if they are permutations of ea...
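The sorting step described in the abstract can be sketched as follows (a minimal simulation, not the paper's own code; the cyclic word is held as a plain Python string):

```python
import random

def tasep_step(word):
    """One step of the cyclic-word TASEP: pick a cyclically adjacent
    pair of letters uniformly at random and sort it ascending."""
    n = len(word)
    i = random.randrange(n)      # pair (word[i], word[(i+1) % n]), cyclically
    j = (i + 1) % n
    w = list(word)
    if w[i] > w[j]:
        w[i], w[j] = w[j], w[i]  # sorting the pair; otherwise nothing changes
    return "".join(w)
```

Repeated calls on the word 3124 reproduce exactly the transitions listed in the abstract: sorting the pairs 31, 12, 24, or cyclic 43 yields 1324, 3124, 3124, or 4123.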

Contracting Strategy Based on Markov Process Modeling

One of the fundamental activities in multiagent systems is the exchange of tasks among agents (Davis & Smith 1983). In particular, we are interested in contracts among self-interested agents (Sandholm & Lesser 1995), where a contractor desires to find a contractee that will perform the task for the lowest payment, and a contractee wants to perform tasks that maximize its profit (payment received...

Quantile Markov Decision Process

In this paper, we consider the problem of optimizing the quantiles of the cumulative rewards of Markov Decision Processes (MDP), which we refer to as Quantile Markov Decision Processes (QMDP). Traditionally, the goal of a Markov Decision Process (MDP) is to maximize expected cumulative reward over a defined horizon (possibly infinite). In many applications, however, a decision maker may ...
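The contrast between the expectation objective and a quantile objective can be illustrated on a toy one-step choice (a hypothetical example constructed here, not taken from the paper): a mean-maximizer prefers a risky reward with higher average, while a lower-quantile criterion prefers the sure reward.

```python
# Two one-shot reward distributions, each outcome equally likely.
safe = [1.0, 1.0, 1.0, 1.0]        # reward 1 with certainty
risky = [0.0, 0.0, 3.0, 3.0]       # reward 0 or 3, each with probability 1/2

def lower_quantile(xs, tau):
    """Lower empirical tau-quantile of a finite list of outcomes."""
    s = sorted(xs)
    return s[max(0, int(tau * len(s)) - 1)]

mean_safe = sum(safe) / len(safe)      # 1.0
mean_risky = sum(risky) / len(risky)   # 1.5: the mean criterion picks risky
q_safe = lower_quantile(safe, 0.25)    # 1.0
q_risky = lower_quantile(risky, 0.25)  # 0.0: the 0.25-quantile picks safe
```

The two criteria rank the actions oppositely, which is the motivation the abstract gives for studying quantile objectives.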

Strictly Increasing Markov Chains as Wear Processes

To model the lifetime of a device, increasing Markov chains are used. The transition probabilities of the chain are as follows: p_{i,j} = p if j = i + δ, and p_{i,j} = 1 − p if j = i + 2δ. The mean time to failure of the device, namely the mean number of transitions required for the process, starting from x0, to take on a value greater than or equal to x0 + kδ, is computed explicitly. A second version of...
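The mean time to failure of the chain described above can be computed by a backward recursion on the level (a minimal sketch under the abstract's transition rule, measuring levels in units of δ; the paper derives a closed form instead):

```python
def mean_time_to_failure(k, p):
    """Expected number of transitions for a chain that moves
    +1 level with probability p and +2 levels with probability 1-p
    to first reach level >= k, starting from level 0.
    m[j] = 1 + p*m[j+1] + (1-p)*m[j+2], with m[j] = 0 for j >= k."""
    m = [0.0] * (k + 2)              # levels k and k+1 are absorbing
    for j in range(k - 1, -1, -1):
        m[j] = 1.0 + p * m[j + 1] + (1.0 - p) * m[j + 2]
    return m[0]
```

Sanity checks: with p = 1 every step gains one level, so k steps are needed; with p = 0 every step gains two levels, so ceil(k/2) steps are needed.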


Journal

Journal title: Kyoto Journal of Mathematics

Year: 1962

ISSN: 2156-2261

DOI: 10.1215/kjm/1250524893